computability A hypothetical machine defined in 1935-6 by Alan Turing. It
consists of an infinitely long "tape" with symbols (chosen
from some
finite set) written at regular intervals. A
pointer marks the current position and the machine is in one
of a finite set of "internal states". At each step the
machine reads the symbol at the current position on the tape.
For each combination of current state and symbol read, a
program specifies the new state and either a symbol to write
to the tape or a direction to move the pointer (left or right)
or to halt.
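
As an illustration only, this write-or-move scheme might be
sketched in Python roughly as follows; the rule format and the
names used are assumptions made for the example, not a standard
notation:

    from collections import defaultdict

    def run(program, tape, state="start", pos=0, max_steps=10000):
        # Cells never written read as the blank symbol "0".
        cells = defaultdict(lambda: "0", enumerate(tape))
        for _ in range(max_steps):
            symbol = cells[pos]
            # The program maps (state, symbol) to (new state, action).
            state, action = program[(state, symbol)]
            if action == ("halt",):
                return cells, pos, state
            kind, value = action
            if kind == "write":
                cells[pos] = value      # write a symbol, do not move
            else:
                pos += value            # move: -1 is left, +1 is right
        raise RuntimeError("step limit reached")

    # Example program: flip the symbol under the pointer, then halt.
    program = {
        ("start", "0"): ("wrote1", ("write", "1")),
        ("start", "1"): ("wrote0", ("write", "0")),
        ("wrote1", "1"): ("done", ("halt",)),
        ("wrote0", "0"): ("done", ("halt",)),
    }
    run(program, "10110")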
In an alternative scheme, the machine writes a symbol to the
tape *and* moves at each step. This can be encoded as a write
state followed by a move state for the write-or-move machine.
If the write-and-move machine is also given a distance to move
then it can emulate a write-or-move program by using states
with a distance of zero. A further variation is whether
halting is an action like writing or moving or whether it is a
special state.
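
Continuing the illustrative sketch above, the encoding of a
write-and-move rule as a write state followed by a move state
could be expressed like this (again an assumption for the
example, introducing one fresh intermediate state per rule):

    def split_rules(write_and_move_program, symbols=("0", "1")):
        # Turn rules (state, symbol) -> (new_state, write_symbol, move)
        # into write-or-move rules usable by run() above.
        out = {}
        for (state, symbol), (new_state, write_symbol, move) in \
                write_and_move_program.items():
            mid = (state, symbol, "then-move")   # fresh intermediate state
            out[(state, symbol)] = (mid, ("write", write_symbol))
            for s in symbols:   # move regardless of the symbol now under the pointer
                out[(mid, s)] = (new_state, ("move", move))
        return out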
[What was Turing's original definition?]
Without loss of generality, the symbol set can be limited to
just "0" and "1" and the machine can be restricted to start on
the leftmost 1 of the leftmost string of 1s with strings of 1s
being separated by a single 0. The tape may be infinite in
one direction only, with the understanding that the machine
will halt if it tries to move off the other end.
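
For illustration, under one common (assumed) convention a number
n is written as n consecutive 1s (representing zero usually
requires n+1 ones instead), so the arguments (3, 2) would be laid
out on the right-infinite tape like this:

    def encode_input(*numbers):
        # Unary arguments separated by a single 0; the pointer starts
        # at cell 0, i.e. on the leftmost 1, and the tape extends to
        # the right only.
        return "0".join("1" * n for n in numbers)

    encode_input(3, 2)   # -> "111011"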
All computer instruction sets, high level languages and computer
architectures, including parallel processors, can
be shown to be equivalent to a Turing Machine and thus
equivalent to each other in the sense that any problem that
one can solve, any other can solve given sufficient time and
memory.
Turing generalised the idea of the Turing Machine to a
"Universal Turing Machine" which was programmed to read
instructions, as well as data, off the tape, thus giving rise
to the idea of a general-purpose programmable computing
device. This idea still exists in modern computer design with
low level microcode which directs the reading and decoding of
higher level machine code instructions.
Dr. Hava Siegelmann of
Technion reported in Science of 28
Apr 1995 that she has found a mathematically rigorous class of
machines, based on ideas from
chaos theory and
neural networks, that are more powerful than Turing Machines. Sir
Roger Penrose of Oxford University has argued that the brain
can compute things that a Turing Machine cannot, which would
mean that it would be impossible to create
artificial intelligence. Dr. Siegelmann's work suggests that this is
true only for conventional computers and may not cover
neural networks.
(1995-05-10)